Search for: All records

Creators/Authors contains: "Khan, S"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without a charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Geographic information system (GIS) based landslide susceptibility mapping is a proven methodology for understanding and forecasting infrastructure impacts during significant weather events. While researchers worldwide have increasingly applied GIS and machine learning methods to study landslide susceptibility on hillside slopes affected by geomorphological and hydrological factors, there is a noticeable lack of focus on highway slope (HWS) failures in the literature. This research addresses this gap by comprehensively evaluating HWS failure susceptibility in central Mississippi counties. The study focused on developing an inventory of HWS susceptible to failure, susceptibility mapping, and model validation using probabilistic and statistical methods. Several supervised machine learning (ML) classification models, including artificial neural networks, random forest, and logistic regression, were compared on the classification problem of HWS failure susceptibility mapping. Various data sources were utilized to develop causative factors, including Digital Terrain Models (DTM) created from remote sensing platforms such as satellites, drone-mounted sensors, and terrestrial LiDAR. The failed slopes investigated in this study were from four counties in central Mississippi. The resolution used was 3 ft × 3 ft per pixel, representing an area of 9 ft² per pixel. A ratio of 1:2 was maintained between failed and non-failed areas within the study area for developing the failure susceptibility prediction models. The causative factors considered in this study encompassed geotechnical and geomorphological attributes, such as slope, aspect, curvature, elevation, normalized difference vegetation index (NDVI), soil composition, and terrain from DTM. Hydrological factors were also incorporated, including precipitation, distance from the stream, groundwater depth, and Topographic Wetness Index (TWI). 
These causative factors were utilized as independent features to train the classification ML models for predicting vulnerable HWS. Feature influence was calculated from the random forest model's classification of failed vs. non-failed assets on the unseen data set. Ground elevation was the highest contributing factor, followed by distance from streams, NDVI, and precipitation. The results of this study can significantly contribute to transportation agencies by offering valuable insights to target preventative maintenance efforts and mitigate catastrophic failures caused by significant rainfall and weather events on road networks and highway slopes. The findings advocate for the integration of an AI/ML-based approach within asset management programs, enabling transportation agencies to rapidly detect at-risk infrastructure. This ML-based automated detection is especially beneficial when identifying vulnerable sites before a forecasted extreme event, providing value to infrastructure resiliency efforts. 
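At its core, the modeling step described above is supervised classification followed by feature-importance ranking. A minimal, self-contained sketch using a random forest on synthetic data (the feature names match the abstract, but all values, the 1:2 class ratio construction, and the injected elevation signal are illustrative assumptions, not the study's data):

```python
# Sketch: random forest susceptibility classification with importance ranking.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
features = ["elevation", "slope", "aspect", "curvature", "ndvi",
            "twi", "precipitation", "distance_to_stream"]

# Synthetic pixels: 200 failed (label 1) and 400 non-failed (label 0),
# matching the 1:2 failed-to-non-failed ratio described in the abstract.
n_fail, n_ok = 200, 400
X = rng.normal(size=(n_fail + n_ok, len(features)))
y = np.array([1] * n_fail + [0] * n_ok)
X[:n_fail, 0] += 1.0  # inject a weak elevation signal for failed pixels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Impurity-based importances rank the causative factors.
ranking = sorted(zip(features, clf.feature_importances_),
                 key=lambda t: t[1], reverse=True)
for name, imp in ranking:
    print(f"{name:18s} {imp:.3f}")
```

In a real application the feature matrix would come from raster layers sampled at each pixel, and importance rankings would be read from the model evaluated on held-out data, as in the study.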
    Free, publicly-accessible full text available March 2, 2026
  2. Highway slopes are susceptible to various geohazards, including landslides, rockfalls, and soil creep, necessitating early detection to minimize disruptions, prevent collisions, and ensure road safety. Conventional methods, such as visual inspections and periodic surveys, may overlook subtle changes or fail to provide timely alerts. This research aims to enhance slope movement and instability detection by leveraging advanced remote-sensing technologies such as interferometric synthetic aperture radar (InSAR), light detection and ranging (LiDAR), and uncrewed aerial vehicles (UAVs). The primary objective is to develop an integrated approach combining multiple data sources to detect and predict highway slope movement effectively. InSAR offers surface deformation measurements over time, capturing nuanced slope movements, while LiDAR and UAVs provide high-resolution elevation information, including slope angles, curvature, and vegetation cover. This study explores methods to integrate these complementary data sets to validate the slope movement detection from InSAR. The research establishes a baseline ground motion scenario using historical open-access Sentinel-1 satellite data spanning 2018–2024 for the central Mississippi region, which is characterized by expansive clay prone to volume changes. The baseline ground motion scenario is then compared with ground truth from near-surface remote sensing surveys conducted with LiDAR and UAV photogrammetry. The point clouds and imagery obtained from LiDAR and UAVs facilitated cross-verification and validation of the InSAR ground displacements. This study provides a comprehensive and innovative methodology for monitoring highway infrastructure using InSAR and near-surface remote sensing techniques such as LiDAR and UAV. 
Continuous ground motion analysis provides immediate feedback on slope performance, helping to prevent potential failures. LiDAR change detection allows for detailed evaluation of highway slopes and precise identification of potential failure locations. Integrating remote sensing techniques into geotechnical asset management programs is crucial for proactively assessing risks and enhancing highway safety and resilience. Future studies will use this data set to create finite-element-based numerical models, aiding in developing surrogate models for highway embankments based on observed InSAR ground motion patterns. This study will also serve as a foundation for future machine-learning classification models for detecting vulnerable geo-infrastructure assets. 
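The LiDAR change-detection step described above reduces, in its simplest form, to differencing two co-registered DTM rasters and thresholding the elevation change. A minimal sketch on synthetic grids (the grid size, noise level, simulated slump, and 0.2 m threshold are illustrative assumptions, not the study's parameters):

```python
# Sketch: DTM-differencing change detection between two survey epochs.
import numpy as np

rng = np.random.default_rng(7)
dtm_baseline = rng.normal(100.0, 0.02, size=(50, 50))            # first survey (m)
dtm_repeat = dtm_baseline + rng.normal(0.0, 0.02, size=(50, 50)) # repeat survey (m)

# Simulate a localized slump: 0.5 m of settlement in one 10 x 10 patch.
dtm_repeat[20:30, 20:30] -= 0.5

diff = dtm_repeat - dtm_baseline   # negative values = ground loss / settlement
threshold = -0.2                   # metres; depends on survey noise
moving = diff < threshold          # boolean mask of flagged cells

print(f"flagged cells: {moving.sum()} of {moving.size}")
print(f"max settlement: {diff.min():.2f} m")
```

In practice the threshold is set from the co-registration and sensor error budget, and flagged clusters would be cross-checked against the InSAR displacement time series rather than taken alone.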
    Free, publicly-accessible full text available March 2, 2026
  3. Beauregard, Melissa S; Budge, Aaron S (Eds.)
    Soil bioengineering using Vetiver is a widely used vegetation-based slope failure mitigation technique. Though Sunshine Vetiver grass (Chrysopogon zizanioides) develops roots up to 3 m deep in tropical and subtropical climates, the depth to which Vetiver influences soil properties has remained undetermined. This study investigated the subsurface influence zone of Vetiver grass using two nondestructive geophysical methods, Electrical Resistivity Imaging (ERI) and Multichannel Analysis of Surface Waves (MASW), on a high plasticity expansive clay soil slope in Mississippi, United States. ERI data collected on the slope revealed that the top 2 m of the high plasticity clay soil had higher resistivity values with Vetiver (ranging from 4 to 60 Ω·m) compared to the soil without Vetiver (ranging from 2 to 28 Ω·m). MASW results at the same slope indicated a similar increase in shear wave velocity with Vetiver down to 2 m, indicating enhanced soil stiffness compared to the section without it. The combined geophysical approach using ERI and MASW reveals that the root system of the Vetiver grass reduced the moisture content and increased the stiffness of the soil within the top layers. Though the grass roots can grow more than 3 m into the soil, the major influence was observed within the top 2 m from the slope surface. 
    Free, publicly-accessible full text available February 27, 2026
  4. Free, publicly-accessible full text available March 10, 2026
  5. Summary A popular method for variance reduction in causal inference is propensity-based trimming, the practice of removing units with extreme propensities from the sample. This practice has theoretical grounding when the data are homoscedastic and the propensity model is parametric (Crump et al., 2009; Yang & Ding, 2018), but in modern settings where heteroscedastic data are analysed with nonparametric models, existing theory fails to support current practice. In this work, we address this challenge by developing new methods and theory for sample trimming. Our contributions are three-fold. First, we describe novel procedures for selecting which units to trim. Our procedures differ from previous works in that we trim, not only units with small propensities, but also units with extreme conditional variances. Second, we give new theoretical guarantees for inference after trimming. In particular, we show how to perform inference on the trimmed subpopulation without requiring that our regressions converge at parametric rates. Instead, we make only fourth-root rate assumptions like those in the double machine learning literature. This result applies to conventional propensity-based trimming as well, and thus may be of independent interest. Finally, we propose a bootstrap-based method for constructing simultaneously valid confidence intervals for multiple trimmed subpopulations, which are valuable for navigating the trade-off between sample size and variance reduction inherent in trimming. We validate our methods in simulation, on the 2007–2008 National Health and Nutrition Examination Survey and on a semisynthetic Medicare dataset, and find promising results in all settings. 
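Conventional propensity-based trimming, the baseline practice the paper generalizes, can be sketched as follows. The data-generating process, logistic propensity model, and 0.1 cutoff are illustrative assumptions; the paper's variance-based trimming rule and its inference and bootstrap procedures are not shown here:

```python
# Sketch: trim units with extreme estimated propensities, then estimate the
# treatment effect on the retained subpopulation via inverse propensity
# weighting (IPW).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 2))
# True propensity depends on covariates; some units get extreme scores.
e_true = 1 / (1 + np.exp(-2.0 * x[:, 0]))
t = rng.binomial(1, e_true)
y = 1.0 * t + x[:, 0] + rng.normal(size=n)   # constant treatment effect of 1

# Estimated propensities from a fitted logistic model.
e_hat = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]

alpha = 0.1                                   # trimming cutoff (illustrative)
keep = (e_hat > alpha) & (e_hat < 1 - alpha)  # drop extreme-propensity units

# IPW estimate of the effect on the trimmed subpopulation.
tk, yk, ek = t[keep], y[keep], e_hat[keep]
ate = np.mean(tk * yk / ek) - np.mean((1 - tk) * yk / (1 - ek))
print(f"kept {keep.sum()}/{n} units, trimmed IPW estimate: {ate:.2f}")
```

The estimand after trimming is the effect on the retained subpopulation, which is the trade-off the paper's simultaneous confidence intervals are designed to navigate across multiple cutoffs.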
  6. Vision-language models are integral to computer vision research, yet many high-performing models remain closed-source, obscuring their data, design and training recipe. The research community has responded by using distillation from black-box models to label training data, achieving strong benchmark results, at the cost of measurable scientific progress. However, without knowing the details of the teacher model and its data sources, scientific progress remains difficult to measure. In this paper, we study building a Perception Language Model (PLM) in a fully open and reproducible framework for transparent research in image and video understanding. We analyze standard training pipelines without distillation from proprietary models and explore large-scale synthetic data to identify critical data gaps, particularly in detailed video understanding. To bridge these gaps, we release 2.8M human-labeled instances of fine-grained video question-answer pairs and spatio-temporally grounded video captions. Additionally, we introduce PLM-VideoBench, a suite for evaluating challenging video understanding tasks focusing on the ability to reason about "what", "where", "when", and "how" of a video. We make our work fully reproducible by providing data, training recipes, code & models. 
    Free, publicly-accessible full text available July 23, 2026